In machine learning, training data often capture the behavior of multiple subgroups of some underlying human population. When the quality of the training data for these subgroups is not controlled carefully, under-representation bias arises. To counter this effect, we introduce two natural notions of fairness, subgroup fairness and instantaneous fairness, to address such under-representation bias in time-series forecasting problems. Here, we show globally convergent methods based on hierarchies of convexifications of non-commutative polynomial optimization problems. Our empirical results on a biased data set motivated by an insurance application and on the well-known COMPAS data set demonstrate the efficacy of our methods. We also show that by exploiting sparsity in the convexifications, we can reduce the run time of our methods considerably.
Recently, there has been much interest in the regulation of AI. We argue for a view based on civil-rights legislation, grounded in the notions of equal treatment and equal impact. In a closed-loop view of the AI system and its users, equal treatment concerns a single pass through the loop. Equal impact, we argue, concerns the long-run average behavior across repeated interactions. In order to establish the existence of this average and to characterize its properties, one needs to study the ergodic properties of the closed loop and its unique stationary measure.
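To make the ergodic-average idea concrete, here is a minimal toy simulation, not the paper's model: a two-state Markov chain stands in for the closed loop, and the time average of a per-state "impact" converges to its expectation under the unique stationary measure. The transition kernel and impact values are purely illustrative assumptions.

```python
import numpy as np

# Toy closed loop: a two-state Markov chain with an illustrative
# per-state "impact". Under ergodicity, the time average of the impact
# converges to its expectation under the unique stationary measure.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # illustrative transition kernel
impact = np.array([1.0, -0.5])      # illustrative per-state impact

rng = np.random.default_rng(0)
state, total, T = 0, 0.0, 200_000
for _ in range(T):
    total += impact[state]
    state = rng.choice(2, p=P[state])

# Stationary measure of P (solves pi @ P = pi): pi = (2/3, 1/3) here.
pi = np.array([2 / 3, 1 / 3])
print(total / T, pi @ impact)       # the two should approximately agree
```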
Stochastic differential equations of Langevin-diffusion form have received significant recent attention, thanks to their foundational role both in Bayesian sampling algorithms and in optimization in machine learning. In the latter, they serve as a conceptual model of the stochastic gradient flow in the training of over-parameterized models. However, the literature typically assumes smoothness of the potential, whose gradient is the drift term. Nevertheless, there are many problems for which the potential function is not continuously differentiable, and hence the drift is not Lipschitz continuous everywhere. This is exemplified by robust losses and rectified linear units in regression problems. In this paper, we show some foundational results regarding the flow and asymptotic properties of Langevin-type stochastic differential inclusions under assumptions appropriate to the machine-learning setting. In particular, we show strong existence of the solution, as well as asymptotic minimization of the canonical free-energy functional.
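As a minimal illustration of the setting, assuming a one-dimensional non-smooth potential and illustrative step sizes and constants not taken from the paper, an Euler-Maruyama scheme can run Langevin dynamics using any selection from the subdifferential as the drift:

```python
import numpy as np

def subgrad_potential(x):
    # A subgradient selection for the non-smooth potential
    # U(x) = |x| + 0.5 * x**2; the |x| term models a robust/ReLU-like
    # kink, and np.sign picks one element of the subdifferential at 0.
    return np.sign(x) + x

def langevin_step(x, step, beta, rng):
    # One Euler-Maruyama step of dX = -dU(X) dt + sqrt(2/beta) dW.
    noise = rng.standard_normal(x.shape)
    return x - step * subgrad_potential(x) + np.sqrt(2.0 * step / beta) * noise

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)          # 1000 independent chains
for _ in range(5000):
    x = langevin_step(x, step=1e-3, beta=5.0, rng=rng)

# Samples should approximately follow the Gibbs measure proportional
# to exp(-beta * U(x)), i.e., concentrate near the minimizer at 0.
print(x.mean(), x.std())
```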
Optimization problems with set-submodular objective functions have many real-world applications. In discrete scenarios where the same item may be chosen multiple times, the domain generalizes from the 2-element set to a bounded integer lattice. In this work, we consider the problem of maximizing a monotone submodular function on the bounded integer lattice subject to a cardinality constraint. In particular, we focus on maximizing DR-submodular functions, i.e., functions defined on the integer lattice that exhibit the diminishing-returns property. Given any epsilon > 0, we present a randomized algorithm with probabilistic guarantees of a (1 - 1/e - epsilon) approximation, using a framework inspired by the random greedy algorithm developed by Mirzasoleiman et al. We then show that, on synthetic DR-submodular functions, applying our proposed algorithm on the integer lattice is faster than the alternatives, including reducing the target problem to the set domain and then applying the fastest known set-submodular maximization algorithms.
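A hedged sketch of this style of algorithm follows; the per-round sampling rule mimics Mirzasoleiman et al.'s stochastic greedy, but the exact procedure and guarantees in the paper may differ, and the objective and names below are illustrative.

```python
import math
import random

def stochastic_greedy_lattice(f, n, bounds, k, eps=0.1, seed=0):
    """Heuristic sketch: maximize a monotone DR-submodular f on the
    integer lattice {0,...,bounds[i]}^n subject to sum(x) <= k.
    f takes a tuple of n non-negative integers."""
    rng = random.Random(seed)
    x = [0] * n
    # Per-round sample size, as in stochastic greedy on sets.
    s = max(1, int((n / k) * math.log(1.0 / eps)))
    for _ in range(k):
        candidates = [i for i in range(n) if x[i] < bounds[i]]
        if not candidates:
            break
        sample = rng.sample(candidates, min(s, len(candidates)))
        base = f(tuple(x))
        gains = {}
        for i in sample:           # marginal gain of one unit increment
            x[i] += 1
            gains[i] = f(tuple(x)) - base
            x[i] -= 1
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break
        x[best] += 1
    return x

# Toy DR-submodular objective: separable and coordinate-wise concave.
f = lambda x: sum(math.sqrt(v) for v in x)
print(stochastic_greedy_lattice(f, n=5, bounds=[3] * 5, k=8))
```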
We study the problem of graph clustering under a broad class of objectives in which the quality of a cluster is defined based on the ratio between the number of edges in the cluster and the total weight of vertices in the cluster. We show that our definition is closely related to popular clustering measures, namely normalized associations, a dual of the normalized cut objective, and normalized modularity. We give a linear-time constant-factor approximation algorithm for our objective, which implies the first constant-factor approximation algorithms for normalized modularity and normalized associations.
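A minimal sketch of the per-cluster ratio objective described above, assuming unweighted edges and the illustrative names below:

```python
def cluster_ratio(edges, weight, cluster):
    """Ratio objective for one cluster: (# edges inside the cluster)
    divided by (total weight of its vertices). `edges` is a list of
    (u, v) pairs, `weight` maps vertex -> weight, `cluster` is a set
    of vertices; all names are illustrative."""
    c = set(cluster)
    inside = sum(1 for u, v in edges if u in c and v in c)
    return inside / sum(weight[v] for v in c)

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
weight = {0: 1.0, 1: 1.0, 2: 2.0, 3: 1.0}
print(cluster_ratio(edges, weight, {0, 1, 2}))  # 3 / 4.0 = 0.75
```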
We study the problem of combining neural networks with symbolic reasoning. Recently introduced frameworks for Probabilistic Neurosymbolic Learning (PNL), such as DeepProbLog, perform exponential-time exact inference, limiting the scalability of PNL solutions. We introduce Approximate Neurosymbolic Inference (A-NeSI): a new framework for PNL that uses neural networks for scalable approximate inference. A-NeSI 1) performs approximate inference in polynomial time without changing the semantics of probabilistic logics; 2) is trained using data generated by the background knowledge; 3) can generate symbolic explanations of predictions; and 4) can guarantee the satisfaction of logical constraints at test time, which is vital in safety-critical applications. Our experiments show that A-NeSI is the first end-to-end method to scale the Multi-digit MNISTAdd benchmark to sums of 15 MNIST digits, up from 4 in competing systems. Finally, our experiments show that A-NeSI achieves explainability and safety without a penalty in performance.
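A heavily simplified sketch of the data-generation idea for a two-digit MNIST-Add-style task follows; the architecture, shapes, and names are illustrative assumptions, not the authors' exact model. Symbolic worlds are sampled from the background knowledge (here, digit addition), and an inference network is trained to predict the program output from beliefs over the symbols in polynomial time.

```python
import torch
import torch.nn as nn

# Train an approximate inference net q(sum | beliefs) on data generated
# by the symbolic background knowledge (two-digit addition). All
# hyperparameters below are illustrative.
inference_net = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 19))  # sums 0..18
opt = torch.optim.Adam(inference_net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):
    # Sample beliefs over the two digits, then a symbolic world.
    beliefs = torch.softmax(torch.randn(128, 2, 10), dim=-1)
    digits = torch.distributions.Categorical(beliefs).sample()  # (128, 2)
    sums = digits.sum(dim=-1)                  # symbolic program output
    logits = inference_net(beliefs.flatten(1)) # single forward pass
    loss = loss_fn(logits, sums)
    opt.zero_grad(); loss.backward(); opt.step()
```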
This paper presents a conversational AI platform called Flowstorm. Flowstorm is an open-source SaaS project suitable for creating, running, and analyzing conversational applications. Thanks to the fast and fully automated build process, the dialogues created within the platform can be executed in seconds. Furthermore, we propose a novel dialogue architecture that combines tree structures with generative models. The tree structures are also used for training NLU models suitable for specific dialogue scenarios. The generative models, in contrast, are used globally across applications and extend the functionality of the dialogue trees. Moreover, the platform functionality benefits from out-of-the-box components, such as the one responsible for extracting data from utterances or working with crawled data. Additionally, it can be extended using custom code directly in the platform. One of the essential features of the platform is the possibility to reuse created assets across applications. There is a library of prepared assets to which each developer can contribute. All of the features are available through a user-friendly visual editor.
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice, as well as the bottlenecks, faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Chain-of-thought prompting successfully improves the reasoning capabilities of large language models, achieving state-of-the-art results on a range of datasets. However, these reasoning capabilities appear to emerge only in models with over 100 billion parameters. In this paper, we explore transferring such reasoning capabilities to models with fewer than 100 billion parameters via knowledge distillation. Specifically, we finetune a student model on the chain-of-thought outputs generated by a larger teacher model. Our experiments show that the proposed method improves task performance across arithmetic, commonsense, and symbolic reasoning datasets. For example, the accuracy of T5 XXL on GSM8K improves from 8.11% to 21.99% when finetuned on PaLM-540B-generated chains of thought.
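A minimal sketch of the distillation step follows, using t5-small as a stand-in for the student and a hand-written rationale as a stand-in for teacher output; both stand-ins, and the training hyperparameters, are assumptions for illustration.

```python
import torch
from transformers import T5TokenizerFast, T5ForConditionalGeneration

# Finetune a small student on teacher-generated chains of thought via
# the standard seq2seq cross-entropy loss. The example pair below is
# hand-written for illustration, not actual PaLM-540B output.
tok = T5TokenizerFast.from_pretrained("t5-small")
student = T5ForConditionalGeneration.from_pretrained("t5-small")
opt = torch.optim.AdamW(student.parameters(), lr=3e-4)

pairs = [
    ("Q: Ann has 3 apples and buys 4 more. How many apples now?",
     "Ann starts with 3 apples. 3 + 4 = 7. The answer is 7."),
]

student.train()
for question, teacher_cot in pairs:       # one pass over the pairs
    enc = tok(question, return_tensors="pt")
    labels = tok(teacher_cot, return_tensors="pt").input_ids
    loss = student(**enc, labels=labels).loss
    loss.backward()
    opt.step()
    opt.zero_grad()
```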
In the field of derivative-free optimization, both of its main branches, deterministic and nature-inspired techniques, have seen substantial advances in recent years. In this paper, we provide an extensive computational comparison of selected methods from each of these branches. The chosen representatives were either standard, well-established methods or the best-performing methods from recent numerical comparisons. The computational comparison was performed on five different benchmark sets, and the results were analyzed in terms of performance, time complexity, and convergence properties of the selected methods. The results showed that in situations where the objective function evaluations are relatively cheap, the nature-inspired methods perform significantly better than their deterministic counterparts. However, when function evaluations are costly or otherwise restricted, the deterministic methods may provide more consistent and overall better results.
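As a toy version of such a comparison, one can pit a deterministic direct-search method against a nature-inspired one under a shared evaluation budget; the benchmark function, budget, and method choices below are illustrative and not those of the paper.

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution, rosen

# Deterministic direct search (Nelder-Mead) vs. a nature-inspired
# method (differential evolution) on the 5-D Rosenbrock function,
# with a comparable budget of function evaluations.
budget = 5000
x0 = np.full(5, 2.0)

nm = minimize(rosen, x0, method="Nelder-Mead",
              options={"maxfev": budget})
de = differential_evolution(rosen, bounds=[(-5, 5)] * 5,
                            maxiter=100, seed=0)

print("Nelder-Mead:     ", nm.fun, "evals:", nm.nfev)
print("Diff. evolution: ", de.fun, "evals:", de.nfev)
```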